How to use Weave with Audio Data: An OpenAI Example
This demo uses the OpenAI chat completions API with GPT-4o Audio Preview to generate audio responses to text prompts and track these responses in Weave.

Setup
Start by installing the OpenAI (`openai`) and Weave (`weave`) dependencies, as well as the API key management dependency `set-env`.
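For example, an install cell might look like the sketch below. The package names are taken from the text above; the exact distribution name of `set-env` on PyPI may differ, so adjust as needed:

```python
# Run in a notebook cell. Package names per the text above; pin versions as needed.
%pip install -q openai weave set-env
```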
Next, load the required API keys for OpenAI and W&B. In Colab, keys can be stored and retrieved with `google.colab.userdata`; see the Colab secrets documentation for usage instructions.
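A minimal sketch of loading the keys, assuming the `set-env` dependency exposes a `set_env` helper (an assumption; swap in your own secret-management mechanism if the import does not match your installed package):

```python
import os

# Assumed helper from the set-env dependency; it loads a key from Colab
# secrets, a .env file, or the existing environment.
from set_env import set_env

set_env("OPENAI_API_KEY")
set_env("WANDB_API_KEY")

assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
```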
Audio Streaming and Storage Example
Now we will set up a call to OpenAI's chat completions endpoint with the audio modality enabled. First, create the OpenAI client and initialize a Weave project, as in the sketch below.
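A minimal sketch; the project name here is a placeholder, and the client reads `OPENAI_API_KEY` from the environment set above:

```python
import weave
from openai import OpenAI

# Client picks up OPENAI_API_KEY from the environment.
client = OpenAI()

# Hypothetical project name; traces will appear under this project in Weave.
weave.init("openai-audio-example")
```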
Next, we define a function `prompt_endpoint_and_log_trace` (sketched after the list below). This function has three primary steps:
- We make a completion object using the GPT-4o Audio Preview model, which supports text and audio inputs and outputs.
  - We prompt the model to count to 13 slowly with varying accents.
  - We set the completion to "stream".
- We open a new output file to which the streamed data is written chunk by chunk.
- We return an open file handle to the audio file so Weave logs the audio data in the trace.
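A sketch of the function following those three steps. The chunk layout of streamed audio deltas (`delta.audio["data"]` as base64-encoded PCM) is an assumption to verify against your installed `openai` SDK version, and the voice and file name are illustrative:

```python
import base64
import wave

import weave


@weave.op()
def prompt_endpoint_and_log_trace(system_prompt: str, user_prompt: str):
    # Step 1: create a streaming completion with text + audio modalities.
    completion = client.chat.completions.create(
        model="gpt-4o-audio-preview",
        modalities=["text", "audio"],
        audio={"voice": "alloy", "format": "pcm16"},  # raw 16-bit PCM deltas
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        stream=True,
    )

    # Step 2: write each base64-encoded PCM chunk into a WAV container.
    with wave.open("output.wav", "wb") as wav_file:
        wav_file.setnchannels(1)      # mono
        wav_file.setsampwidth(2)      # 16-bit samples
        wav_file.setframerate(24000)  # PCM sample rate used by gpt-4o audio
        for chunk in completion:
            delta = chunk.choices[0].delta if chunk.choices else None
            audio = getattr(delta, "audio", None)
            if audio and audio.get("data"):
                wav_file.writeframes(base64.b64decode(audio["data"]))

    # Step 3: return an open handle so Weave logs the audio in the trace.
    return open("output.wav", "rb")
```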
Testing
Run the following cell. The system and user prompts will be stored in a Weave trace, along with the output audio. After running the cell, click the link next to the 🍩 emoji to view your trace.
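For example (the prompts here are illustrative):

```python
# Illustrative prompts; the returned open file handle is what Weave captures.
prompt_endpoint_and_log_trace(
    system_prompt="You are a dramatic narrator with a flair for accents.",
    user_prompt="Count to 13 slowly, changing your accent every few numbers.",
)
```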
Advanced Usage: Realtime Audio API with Weave